Contact Name
Yuhefizar
Contact Email
jurnal.resti@gmail.com
Phone
+628126777956
Journal Mail Official
ephi.lintau@gmail.com
Editorial Address
Politeknik Negeri Padang, Kampus Limau Manis, Padang, Indonesia.
Location
INDONESIA
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi)
ISSN : 2580-0760     EISSN : 2580-0760     DOI : https://doi.org/10.29207/resti.v2i3.606
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) is intended as a medium for the scholarly publication of research results, ideas, and critical-analytical studies in Systems Engineering, Informatics/Information Technology, Informatics Management, and Information Systems. It is part of the effort to disseminate knowledge gained from research and scholarship to the wider community, and to serve as an academic reference in the field of information technology. Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) accepts scientific articles within the following scope: Software Engineering; Hardware Engineering; Information Security; Systems Engineering; Expert Systems; Decision Support Systems; Data Mining; Artificial Intelligence Systems; Computer Networks; Computer Engineering; Image Processing; Genetic Algorithms; Information Systems; Business Intelligence and Knowledge Management; Database Systems; Big Data; Internet of Things; Enterprise Computing; Machine Learning; and other relevant topics.
Articles 21 Documents
Issue "Vol 8 No 1 (2024): February 2024" : 21 Documents
Improving Algorithm Performance using Feature Extraction for Ethereum Forecasting
Indri Tri Julianto; Dede Kurniadi; Ricky Rohmanto; Fathia Alisha Fauzia
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 8 No 1 (2024): February 2024
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

DOI: 10.29207/resti.v8i1.4872

Abstract

Ethereum is a cryptocurrency that is now the second most popular digital asset after Bitcoin. High trading volume has driven the popularity of this cryptocurrency. In addition, Ethereum is home to various decentralized applications and acts as a hub for Decentralized Finance (DeFi) transactions, Non-Fungible Tokens (NFTs), and the use of smart contracts in the crypto space. This study aims to improve the performance of the forecasting algorithm by using feature extraction for Ethereum price forecasting. The algorithms used are neural networks, deep learning, and support vector machines. The research methodology used is Knowledge Discovery in Databases. The dataset comes from the yahoo.finance.com website and covers Ethereum prices. The results show that the neural network is the best-performing algorithm, outperforming deep learning and the support vector machine. Before feature selection, the root mean square error for the neural network is 93,248 +/- 168,135 (micro average: 186,580 +/- 0,000) with the Linear Sampling method and 54,451 +/- 26,771 (micro average: 60,318 +/- 0,000) with the Shuffled Sampling method. After feature selection, the root mean square error improves to 38,102 +/- 31,093 (micro average: 48,600 +/- 0,000) with the Shuffled Sampling method.
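The gain from dropping weak features before fitting a neural-network regressor can be sketched as follows. This is a minimal scikit-learn illustration on synthetic data, not the paper's RapidMiner setup; the feature count, model settings, and data are assumptions:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in for price features: the target depends on only two of
# the five columns, so the other three are pure noise.
X = rng.normal(size=(300, 5))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.1, size=300)

def cv_rmse(features):
    """Cross-validated RMSE of a small neural-network regressor."""
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    scores = cross_val_score(model, features, y, cv=5,
                             scoring="neg_root_mean_squared_error")
    return -scores.mean()

rmse_all = cv_rmse(X)
# Keep the k=2 features most correlated with the target, discard the rest.
X_sel = SelectKBest(f_regression, k=2).fit_transform(X, y)
rmse_sel = cv_rmse(X_sel)
print(f"RMSE, all features: {rmse_all:.3f}; after selection: {rmse_sel:.3f}")
```

Shuffled versus linear sampling in the abstract corresponds to whether the cross-validation folds are drawn from shuffled or contiguous rows.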
Development of an Early Warning System Using Social Media for Flood Disaster
I Ketut Kasta Arya Wijaya; Ruben Cornelius Siagian
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 8 No 1 (2024): February 2024
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

DOI: 10.29207/resti.v8i1.5087

Abstract

This research paper introduces an innovative prototype system that uses IoT technologies to monitor floodwater levels. Integration of an ultrasonic sensor, ESP8266 microcontroller, Arduino IDE, and the ThingSpeak platform aims to establish a robust flood monitoring solution. The paper provides a thorough exploration of the system's background, the problem it addresses, the methodology employed, and the obtained results, along with insights into future research directions. The study meticulously describes the design, implementation, and programming code for data collection and transmission within the system. Through extensive field testing and careful data analysis, the paper evaluates the precision and effectiveness of the proposed flood monitoring solution. In particular, the research underscores the advantages of IoT, emphasizing real-time data collection, logging, and analysis as essential components of efficient flood management. Additionally, the paper gives step-by-step instructions for configuring Telegram notifications through the ThingSpeak React app, enhancing the practical applicability of the developed system. The research highlights the potential of IoT in flood monitoring, showcasing its superior accuracy and effectiveness compared to traditional methods. By demonstrating the feasibility and advantages of IoT in the context of flood monitoring, this study enriches existing knowledge and paves the way for future advances in the field. The research encourages continued exploration of advanced techniques to strengthen flood monitoring and management strategies. Ultimately, this work presents a comprehensive IoT-based prototype for floodwater monitoring, offering valuable insights and reinforcing the promising role of IoT technologies in this critical domain.
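The alerting logic such a system typically implements can be sketched as follows. The mounting height, thresholds, and function names below are illustrative assumptions, not values from the paper; an ultrasonic sensor mounted above the water reports a smaller distance as the water rises:

```python
# Hypothetical mounting height of the ultrasonic sensor above the riverbed, cm.
SENSOR_HEIGHT_CM = 300

def water_level(distance_cm: float) -> float:
    """Convert the sensor-to-surface distance into a water depth."""
    return SENSOR_HEIGHT_CM - distance_cm

def alert_level(depth_cm: float) -> str:
    """Map a water depth to an alert state (thresholds are illustrative)."""
    if depth_cm >= 250:
        return "DANGER: trigger Telegram notification"
    if depth_cm >= 150:
        return "WARNING"
    return "NORMAL"

# A reading of 40 cm to the surface means the water is 260 cm deep.
print(alert_level(water_level(40)))
```

On the real device this decision would run against readings pushed to ThingSpeak, with the notification dispatched by its React app.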
Kmeans-SMOTE Integration for Handling Imbalance Data in Classifying Financial Distress Companies using SVM and Naïve Bayes
Didit Johar Maulana; Siti Saadah; Prasti Eko Yunanto
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 8 No 1 (2024): February 2024
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

DOI: 10.29207/resti.v8i1.5140

Abstract

Imbalanced data presents significant challenges in machine learning, leading to biased classification outcomes that favor the majority class. This issue is especially pronounced in the classification of financial distress, where data imbalance is common due to the scarcity of such instances in real-world datasets. This study aims to mitigate data imbalance in financial distress companies using the Kmeans-SMOTE method, which combines Kmeans clustering and the synthetic minority oversampling technique (SMOTE). Various classification approaches, including Naïve Bayes and the support vector machine (SVM), are applied to a Kaggle financial distress dataset to evaluate the effectiveness of Kmeans-SMOTE. Experimental results show that SVM outperforms Naïve Bayes with impressive accuracy (99.1%), f1-score (99.1%), area under the precision-recall curve (AUPRC) (99.1%), and geometric mean (Gmean) (98.1%). On the basis of these results, Kmeans-SMOTE can balance the data effectively, leading to a significant improvement in performance.
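The Kmeans-SMOTE idea (cluster the minority class, then synthesize new points inside the clusters) can be sketched as below. This simplified stand-in interpolates each minority sample toward its cluster centroid rather than toward nearest neighbours as full SMOTE does, and all sizes are illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_smote_sketch(X_min, n_new, n_clusters=3, seed=0):
    """Oversample the minority class: cluster it with KMeans, then create
    synthetic points on the segment between a random minority sample and
    its cluster centroid (a simplification of SMOTE interpolation)."""
    rng = np.random.default_rng(seed)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X_min)
    centroids = km.cluster_centers_[km.labels_]      # centroid of each sample
    idx = rng.integers(0, len(X_min), size=n_new)    # pick random seeds
    gap = rng.random((n_new, 1))                     # interpolation factors
    return X_min[idx] + gap * (centroids[idx] - X_min[idx])

rng = np.random.default_rng(1)
X_majority = rng.normal(0, 1, size=(95, 4))   # e.g. healthy companies
X_minority = rng.normal(3, 1, size=(5, 4))    # e.g. distressed companies
synthetic = kmeans_smote_sketch(X_minority, n_new=90)
print(len(X_minority) + len(synthetic), "minority samples after balancing")
```

In practice the imbalanced-learn library ships a ready-made `KMeansSMOTE` oversampler that implements the full method.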
The Design of a C1 Document Data Extraction Application Using a Tesseract-Optical Character Recognition Engine
Ircham Aji Nugroho; Bety Hayat Susanti; Mareta Wahyu Ardyani; Nadia Paramita R.A.
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 8 No 1 (2024): February 2024
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

DOI: 10.29207/resti.v8i1.5151

Abstract

The 2019 election process used the Vote Counting Information System, also known as Sistem Informasi Penghitungan Suara (Situng), to provide transparency in the recapitulation process. The data displayed in Situng come from document C1 for 813,336 voting stations in Indonesia. The data collected from the C1 document are entered and uploaded into Situng by the officers of the Municipal General Election Commission (GEC). Since this process is performed by humans, it is not immune to errors. In the recapitulation process of the 2019 election results, there were 269 data entry errors, and the data entry process also did not meet the specified target, resulting in delays. Furthermore, there were cases of C1 document modification, raising concerns about the data's authenticity. To avoid human error and increase data entry speed, automatic data entry is a plausible option. The data entered are text data in image documents with the same template format, so optical character recognition (OCR) can be used to read the text, with image quality and alignment improved beforehand to yield a more accurate OCR reading area. In this study, we developed a C1 document data extraction application using the waterfall SDLC method, following a systematic and thorough process. The application was developed using Tesseract optical character recognition. Tesseract is an open-source OCR engine and command-line program that recognizes text characters within a digital image. The accuracy obtained by using this method is still not optimal as a substitute for Situng's data entry officer. To guarantee the integrity of the C1 document, we use the RSA-2048 digital signature scheme. The use of the Tesseract-OCR engine for character recognition, combined with digital signature capabilities, provides a comprehensive solution to reduce the human error factor that can lead to miscalculations and inaccurate processes.
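The integrity step alone can be sketched as follows. The paper signs C1 documents with RSA-2048; since RSA signing requires a third-party library, this stand-in uses a standard-library HMAC to show the same sign-then-verify flow over the extracted bytes, with a placeholder key that is not from the paper:

```python
import hmac
import hashlib

# Placeholder signing key: a real deployment would use an RSA-2048 key pair.
SECRET_KEY = b"hypothetical-gec-signing-key"

def sign(document: bytes) -> str:
    """Produce an integrity tag for the document bytes."""
    return hmac.new(SECRET_KEY, document, hashlib.sha256).hexdigest()

def verify(document: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign(document), tag)

c1_scan = b"votes: candidate A=120, candidate B=95"
tag = sign(c1_scan)
assert verify(c1_scan, tag)                     # authentic document passes
assert not verify(c1_scan + b"tampered", tag)   # any modification fails
print("integrity check OK")
```

An RSA signature additionally lets anyone verify with the public key alone, which is why the paper prefers it for publicly auditable documents.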
Sistem Pemantauan dan Pengendalian Logistik Buah Mangga Berbasiskan Machine Learning (Machine Learning-Based Mango Logistics Monitoring and Control System)
Buyung Hardyansyah; Heru Sukoco; Sony Hartono Wijaya
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 8 No 1 (2024): February 2024
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

DOI: 10.29207/resti.v8i1.5226

Abstract

Fruits are highly perishable goods: they have a short shelf life and can pose significant challenges in trade. A lengthy supply chain can trigger the process of fruit spoilage. The logistics environment, both internal and external, can also contribute to the decline in the quality of goods. One common issue faced by producers is the variability in consumer demand for fruit quality. To address this problem, a machine learning-based logistics monitoring and recommendation system can be developed, utilizing the Long Short-Term Memory (LSTM) and decision tree algorithms. Using machine learning, the system can analyze data from sensors on Internet of Things (IoT) devices, such as temperature and humidity sensors, to identify potential issues in the supply chain and provide recommendations for optimizing logistics operations. In this study, a machine learning-based system is developed for monitoring the shelf life of perishable goods, with a specific focus on mango fruit. The system uses LSTM for predicting mango ripeness and a decision tree algorithm for recommending fruit ripeness. The objective is to provide producers with recommendations that optimize the logistics process for high-quality mangoes and fulfil consumer demands for fruit quality. The implementation of a machine learning-based logistics monitoring and recommendation system can provide significant benefits for mango producers. By leveraging technologies such as the LSTM and decision tree algorithms, producers can optimize their logistics operations, improve fruit quality, reduce waste, and enhance customer satisfaction.
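The recommendation step can be sketched with scikit-learn's decision tree. The sensor readings and ripeness labels below are hypothetical stand-ins for the paper's IoT data:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical IoT readings: [storage temperature in °C, relative humidity %],
# each labeled with an assumed ripeness stage.
X = np.array([[13, 85], [14, 88], [20, 70], [22, 65], [28, 55], [30, 50]])
y = ["unripe", "unripe", "ripening", "ripening", "ripe", "ripe"]

# Fit a small decision tree that maps environment readings to a ripeness
# recommendation; new sensor readings are then classified the same way.
tree = DecisionTreeClassifier(random_state=0).fit(X, y)
print(tree.predict([[21, 68]])[0])
```

In the full system, the LSTM forecasts how ripeness will evolve over time, while a tree like this turns the current readings into an actionable recommendation.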
Investigating the Impact of ReLU and Sigmoid Activation Functions on Animal Classification Using CNN Models
M Mesran; Sitti Rachmawati Yahya; Fifto Nugroho; Agus Perdana Windarto
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 8 No 1 (2024): February 2024
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

DOI: 10.29207/resti.v8i1.5367

Abstract

VGG16 is a convolutional neural network model used for image recognition. It is unique in that it has only 16 weighted layers, rather than relying on a large number of hyperparameters, and it is considered one of the best vision model architectures. This study compares the performance of the ReLU (rectified linear unit) and sigmoid activation functions in CNN models for animal classification. To choose which model to use, we tested two CNN architectures: the default VGG16 and VGG16 with the proposed method. A dataset consisting of 2,000 images of five different animals was used. The results show that ReLU achieves higher classification accuracy than sigmoid. The model with ReLU on the convolutional and fully connected layers achieved the highest accuracy of 97.56% on the test dataset. However, further experiments and considerations are needed to improve the results. The research aims to find better activation functions and identify factors that influence model performance. The dataset consists of animal images collected from Kaggle, including cats, cows, elephants, horses, and sheep. It is divided into training and test sets (ratio 80:20). The CNN model has two convolution layers and two fully connected layers. The ReLU and sigmoid activation functions are used with different learning rates. Evaluation metrics include accuracy, precision, recall, F1 score, and test cost. ReLU outperforms sigmoid in accuracy, precision, recall, and F1 score. However, other factors such as the size, complexity, and parameters of the dataset must be taken into account. This study emphasizes the importance of choosing the right activation function for better classification accuracy. ReLU is identified as effective in solving the vanishing gradient problem. These findings can guide future research to improve CNN models in animal classification.
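The vanishing-gradient contrast between the two activations can be verified directly. This is a NumPy sketch of the derivatives, not the paper's CNN:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    # Derivative of sigmoid: s * (1 - s), which peaks at 0.25 when x = 0.
    s = sigmoid(x)
    return s * (1.0 - s)

def relu_grad(x):
    # Derivative of ReLU: 1 for positive inputs, 0 otherwise.
    return (np.asarray(x) > 0).astype(float)

x = np.linspace(-6.0, 6.0, 121)
# Sigmoid's gradient never exceeds 0.25, so backpropagated gradients shrink
# multiplicatively through deep stacks (the vanishing-gradient problem);
# ReLU passes gradient 1 unchanged for every positive input.
print("max sigmoid gradient:", sigmoid_grad(x).max())
print("ReLU gradient at x=3:", float(relu_grad(3.0)))
```

Ten stacked sigmoid layers can therefore scale a gradient by up to 0.25^10 ≈ 1e-6, which is why ReLU trains deep models more reliably.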
Anatomy Identification of Bamboo Stems with the Convolutional Neural Networks (CNN) Method
Dede Rustandi; Sony Hartono Wijaya; Mushthofa; Ratih Damayanti
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 8 No 1 (2024): February 2024
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

DOI: 10.29207/resti.v8i1.5370

Abstract

It is important to note that some species of bamboo are protected and considered endangered. However, distinguishing between traded and protected bamboo species or differentiating between bamboo species for various purposes remains a challenge. This requires specialized skills to identify the type of bamboo, and currently, the process can only be carried out in the forest for bamboo that is still in clump form by experienced researchers or officers. However, a study has been conducted to develop an easier and faster method of identifying bamboo species. The study aims to create an automatic identification system for bamboo stems based on their anatomical structure (ASINABU). The bamboo identification algorithm was developed using macroscopic images of cross-sectioned bamboo stems and the research method used was the convolutional neural network (CNN). CNN was designed to identify bamboo species with images taken using a cellphone camera equipped with a lens. The final product is an Android automatic identification application that can detect bamboo species with an accuracy of 99.9%.
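The core convolution operation such a CNN applies to a cross-section image can be sketched in NumPy. The patch and kernel below are illustrative, not taken from the paper's trained model:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel applied to a hypothetical grayscale patch: the kind
# of texture cue a CNN can learn from vascular bundles in a bamboo section.
patch = np.array([[0, 0, 9, 9],
                  [0, 0, 9, 9],
                  [0, 0, 9, 9]], dtype=float)
edge = np.array([[1.0, -1.0]])
print(conv2d(patch, edge))   # strong response where intensity jumps
```

A real CNN stacks many such learned kernels with nonlinearities and pooling, so it can discriminate species-specific anatomical patterns rather than a single hand-designed edge.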
A Novel Framework for Information Security During the SDLC Implementation Stage: A Systematic Literature Review
Mikael Octavinus Chan; Setiadi Yazid
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 8 No 1 (2024): February 2024
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

DOI: 10.29207/resti.v8i1.5403

Abstract

This research delves into the critical aspects of information security during the implementation stage of the Software Development Life Cycle (SDLC). By employing a systematic literature review, the study synthesizes findings from various digital repositories, including IEEE Xplore, ACM Digital Library, Scopus, and ScienceDirect, to outline a comprehensive framework addressing the implementation stage's unique security challenges. This research contributes to the field by proposing a novel assurance model for software development vendors that focuses on enhancing information security measures during the implementation stage. The study's findings reveal 12 key steps organizations can adopt to mitigate security risks and enhance information security measures during this critical phase. These steps provide actionable insights and strategies tailored to support security protocols effectively. The article concludes that by incorporating these steps, organizations can significantly improve their security posture, ensuring the integrity and reliability of the software development process, particularly during the implementation stage. This approach not only addresses immediate security concerns but also sets a precedent for future research and practice in secure software development, particularly in the critical implementation stage of the SDLC.
Image Preprocessing Approaches Toward Better Learning Performance with CNN
Dhimas Tribuana; Hazriani; Abdul Latief Arda
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 8 No 1 (2024): February 2024
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

DOI: 10.29207/resti.v8i1.5417

Abstract

Convolutional neural networks (CNNs) are at the forefront of computer vision, relying heavily on the quality of input data determined by the preprocessing method. An ill-chosen preprocessing approach results in poor learning performance. This study critically examines the impact of advanced image preprocessing techniques on CNNs in facial recognition. Emphasizing the importance of data quality, we explore various preprocessing approaches, including noise reduction, histogram equalization, and image hashing. Our methodology involves feature visualization to improve facial feature discernment, training convergence analysis, and real-time model testing. The results demonstrate significant improvements in model performance with the preprocessed dataset: average accuracy, recall, precision, and F1 score enhancements of 4.17%, 3.45%, 3.45%, and 3.81%, respectively. Additionally, real-time testing shows a 21% performance increase and a 1.41% reduction in computing time. This study not only underscores the effectiveness of preprocessing in boosting CNN capabilities, but also opens avenues for future research in applying these methods to diverse image types and exploring various CNN architectures for comprehensive understanding.
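Histogram equalization, one of the preprocessing steps mentioned, can be sketched in NumPy; the low-contrast input patch is illustrative:

```python
import numpy as np

def equalize(img_u8):
    """Histogram equalization: remap intensities through the normalized
    cumulative histogram so the output spreads over the full 0-255 range."""
    hist = np.bincount(img_u8.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img_u8]   # apply the lookup table pixel-wise

# A low-contrast patch squeezed into [100, 120] is stretched to [0, 255],
# making facial features easier for a CNN to discern.
img = np.arange(100, 121, dtype=np.uint8).reshape(3, 7)
out = equalize(img)
print("intensity range after equalization:", out.min(), "-", out.max())
```

Noise reduction and image hashing would be applied alongside this step, the former to suppress sensor noise and the latter to deduplicate near-identical training images.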
Data Mining Techniques for Predictive Classification of Anemia Disease Subtypes
Johan Setiawan; Dita Amalia; Iwan Prasetiawan
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 8 No 1 (2024): February 2024
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

DOI: 10.29207/resti.v8i1.5445

Abstract

Anemia, characterized by insufficient red blood cells or reduced hemoglobin, hinders oxygen transport in the body. Understanding the different types of anemia is vital to tailoring effective prevention and treatment. This research explores the role of data mining in predicting and classifying anemia types, focusing on complete blood count (CBC) and demographic data. Data mining is the key to building models that help healthcare professionals diagnose and treat anemia. Employing the cross-industry standard process for data mining (CRISP-DM), with its six phases, facilitates this effort. Our study compared the Naïve Bayes, J48 Decision Tree, and Random Forest algorithms using RapidMiner tools, evaluating accuracy, mean recall, and mean precision. The J48 decision tree outperformed the others, highlighting the importance of algorithm choice in anemia classification models. Furthermore, our analysis identified renal disease-related and chronic anemia as the most prevalent types, with greater occurrence among females. Recognizing gender disparities in the prevalence of anemia informs customized healthcare decisions. Understanding demographic factors in specific types of anemia is crucial to effective care strategies.
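The three-algorithm comparison can be sketched with scikit-learn. Synthetic data stands in for the CBC dataset, and scikit-learn's CART decision tree approximates RapidMiner's J48; none of the numbers produced here are the paper's results:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for CBC features (e.g. hemoglobin, MCV, RBC count)
# with four hypothetical anemia-subtype classes.
X, y = make_classification(n_samples=600, n_features=8, n_informative=5,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Train each classifier on the same split and compare held-out accuracy.
for name, model in [("Naive Bayes", GaussianNB()),
                    ("J48-style tree", DecisionTreeClassifier(random_state=0)),
                    ("Random Forest", RandomForestClassifier(random_state=0))]:
    acc = accuracy_score(y_te, model.fit(X_tr, y_tr).predict(X_te))
    print(f"{name}: accuracy {acc:.3f}")
```

Under CRISP-DM this loop belongs to the modeling and evaluation phases; the preceding phases would supply the cleaned CBC and demographic features.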

Total Records: 21